Selected Prompt Details

Once you select a prompt from the Prompt Gallery, you are taken to a detailed view where you can interact with the prompt and adjust its settings to your requirements. This page offers an interactive, conversation-based interface where you enter data and receive real-time responses from the AI model.

Interaction Interface

The interaction interface for each prompt provides a structured environment where you can input your data or queries and see the model's responses instantly. For example, when selecting the Audio Diarization prompt, you can input audio files or text, and the model will process and return the output directly in the conversation thread.
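As a rough sketch of what happens behind the interface, each message you type is wrapped in a request to the model. The payload below follows the shape of the public Gemini REST `generateContent` endpoint; the endpoint URL and field names come from that API, while the prompt text itself is purely illustrative.

```python
import json

# Endpoint for the Gemini REST API (v1beta); shown for context only --
# the interface handles this call for you.
API_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

def build_request(user_text: str) -> dict:
    """Wrap a single user turn in the generateContent request shape."""
    return {
        "contents": [
            {"role": "user", "parts": [{"text": user_text}]}
        ]
    }

request_body = build_request("Summarize this meeting transcript.")
print(json.dumps(request_body, indent=2))
```

The model's reply arrives as a `candidates` list in the response, which the interface renders as the next turn in the conversation thread.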

Conversation Flow

  • User Input: You can input various types of data or queries, such as audio transcriptions, code snippets, or text for analysis.
  • Model Output: The model will process the input and generate an appropriate response, such as transcribing audio, reviewing code, or providing answers.
  • System Instructions: You can provide additional instructions on the tone or style of the AI's responses, helping to refine the output further.

The conversation steps are clearly labeled as User or Model, allowing you to easily follow the flow of interactions.
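The three elements above map onto a single request: the labeled steps become a list of `user`/`model` turns, and the system instructions travel in a separate field. This sketch uses the field names from the Gemini REST API (`contents`, `systemInstruction`); the helper function and all the text content are hypothetical.

```python
# Hypothetical helper: assemble a multi-turn conversation payload.
# Each step is labeled "user" or "model", mirroring the labels shown
# in the conversation thread; systemInstruction carries tone/style
# guidance separately from the turns themselves.
def build_conversation(system_note: str, turns: list[tuple[str, str]]) -> dict:
    return {
        "systemInstruction": {"parts": [{"text": system_note}]},
        "contents": [
            {"role": role, "parts": [{"text": text}]}
            for role, text in turns
        ],
    }

payload = build_conversation(
    "Respond in a concise, formal tone.",
    [
        ("user", "Review this Python snippet for bugs."),
        ("model", "The loop index is off by one; start at 0."),
        ("user", "Suggest a fix."),
    ],
)
```

Keeping prior `model` turns in `contents` is what gives the conversation its memory: each new request replays the whole labeled history.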


Customizing the Prompt Settings

On the right side of the page, you will find the Run Settings panel, where you can adjust various parameters to control the behavior of the AI model:

  • Model Selection: Choose the AI model you wish to use from a dropdown list, such as Gemini 1.5 Flash, depending on the task at hand.
  • Token Count: Set the maximum number of tokens the model can use for its output, allowing you to control the length of the response.
  • Temperature: Adjust the temperature setting to influence the creativity and randomness of the output. Higher values result in more diverse responses, while lower values provide more deterministic results.
  • JSON Mode: Enable this mode to have the model return its output as structured JSON, which is ideal for handling structured data exchanges.
  • Code Execution: Toggle this setting to enable or disable code execution within the prompt, allowing for more complex workflows, particularly when working with code-based prompts.
  • Safety Settings: Use these settings to ensure that the AI-generated output adheres to specific safety guidelines, helping control the tone and content of the response.

By adjusting these settings, you can fine-tune how the prompt processes input and generates responses, ensuring the output aligns with your specific project needs.
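To make the mapping concrete, here is how the Run Settings roughly correspond to request fields in the Gemini REST API. The field names (`generationConfig`, `maxOutputTokens`, `responseMimeType`, `safetySettings`, `codeExecution`) are from that API; the specific values chosen below are examples, not defaults.

```python
# Example mapping of Run Settings panel options onto request fields.
# Values are illustrative; pick your own per task.
run_settings = {
    "generationConfig": {
        "temperature": 0.4,           # lower => more deterministic output
        "maxOutputTokens": 1024,      # Token Count: caps response length
        "responseMimeType": "application/json",  # JSON Mode
    },
    "safetySettings": [
        {
            "category": "HARM_CATEGORY_HARASSMENT",
            "threshold": "BLOCK_MEDIUM_AND_ABOVE",
        }
    ],
    "tools": [{"codeExecution": {}}],  # Code Execution toggle
}
```

These fragments are merged into the same request body as the conversation turns, so one call carries both the content and the behavior controls.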


Saving and Running the Prompt

Once you have configured the settings, you can click the Run button to execute the prompt. The system will process the input based on the selected model and settings, and display the output in real time. If the configuration meets your expectations, you can save these settings for future use.

You can also save a copy of the conversation for later analysis or reuse, letting you build on past interactions in other workflows.
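If you prefer to keep conversations outside the tool, a saved thread is just a list of labeled turns, so it serializes naturally to JSON. The helpers and on-disk format below are our own sketch, not an AI Studio export format.

```python
import json
from pathlib import Path

# Hypothetical helpers: persist a conversation thread locally so it
# can be reloaded and replayed in a later workflow.
def save_conversation(turns: list[dict], path: str) -> None:
    """Write the labeled turns to a JSON file."""
    Path(path).write_text(json.dumps({"turns": turns}, indent=2))

def load_conversation(path: str) -> list[dict]:
    """Read the turns back for reuse in another session."""
    return json.loads(Path(path).read_text())["turns"]

save_conversation(
    [{"role": "user", "text": "Transcribe this audio."},
     {"role": "model", "text": "Speaker 1: Hello..."}],
    "saved_conversation.json",
)
```

Reloading the file gives you the same `user`/`model` history, ready to seed a new prompt run.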